Search results for "Stochastic approximat"

Showing 10 of 12 documents

On the stability of some controlled Markov chains and its applications to stochastic approximation with Markovian dynamic

2015

We develop a practical approach to establish the stability, that is, the recurrence in a given set, of a large class of controlled Markov chains. These processes arise in various areas of applied science and encompass important numerical methods. We show in particular how individual Lyapunov functions and associated drift conditions for the parametrized family of Markov transition probabilities and the parameter update can be combined to form Lyapunov functions for the joint process, leading to the proof of the desired stability property. Of particular interest is the fact that the approach applies even in situations where the two components of the process present a time-scale separation, w…

Keywords: stochastic approximation, controlled Markov chains, Lyapunov function, stability, Markov process, computational statistics, adaptive Markov chain Monte Carlo, statistics and probability (MSC: 60J05, 60J22, 65C05)

Convergence of Markovian Stochastic Approximation with discontinuous dynamics

2016

This paper is devoted to the convergence analysis of stochastic approximation algorithms of the form $\theta_{n+1} = \theta_n + \gamma_{n+1} H_{\theta_n}({X_{n+1}})$, where ${\left\{ {\theta}_n, n \in {\mathbb{N}} \right\}}$ is an ${\mathbb{R}}^d$-valued sequence, ${\left\{ {\gamma}_n, n \in {\mathbb{N}} \right\}}$ is a deterministic stepsize sequence, and ${\left\{ {X}_n, n \in {\mathbb{N}} \right\}}$ is a controlled Markov chain. We study the convergence under weak assumptions on smoothness-in-$\theta$ of the function $\theta \mapsto H_{\theta}({x})$. It is usually assumed that this function is continuous for any $x$; in this work, we relax this condition. Our results are illustrated by c…
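
The recursion $\theta_{n+1} = \theta_n + \gamma_{n+1} H_{\theta_n}(X_{n+1})$ can be sketched on a toy problem. The chain, the function $H$, and the target $\theta^\star = 1$ below are illustrative assumptions, not the paper's setting: a controlled AR(1) chain whose stationary mean tracks $\theta$, so the mean field is $h(\theta) = 1 - \theta$.

```python
import numpy as np

def markovian_sa(n_steps=20000, seed=0):
    """Toy Markovian stochastic approximation
    theta_{n+1} = theta_n + gamma_{n+1} * H(theta_n, X_{n+1}),
    where X is a controlled AR(1) chain whose stationary mean is theta,
    so the mean field is h(theta) = 1 - theta with root theta* = 1."""
    rng = np.random.default_rng(seed)
    theta, x = 0.0, 0.0
    for n in range(1, n_steps + 1):
        gamma = 1.0 / n                        # deterministic stepsize sequence
        x = 0.5 * x + 0.5 * theta + rng.normal(scale=0.1)  # controlled chain X_{n+1}
        theta += gamma * (1.0 - x)             # H_theta(x) = 1 - x
    return theta
```

Note that $H_\theta(x) = 1 - x$ is continuous here; the paper's contribution is precisely to relax that continuity assumption.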

Keywords: Markovian stochastic approximation, controlled Markov chain, discontinuous dynamics, convergence, state-dependent noise, Markov process, control and optimization (MSC: 62L20)

A novel technique for stochastic root-finding: Enhancing the search with adaptive d-ary search

2017

The most fundamental problem encountered in the field of stochastic optimization is the Stochastic Root Finding (SRF) problem, where the task is to locate an unknown point x∗ for which g(x∗) = 0 for a given function g that can only be observed in the presence of noise [15]. The vast majority of the state-of-the-art solutions to the SRF problem involve the theory of stochastic approximation. The premise of the latter family of algorithms is to operate by means of so-called "small-step" processes that explore the search space in a conservative manner. Using this paradigm, the point investigated at any time instant is in the proximity of the point investigated at the previous time instant, render…
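
A minimal Robbins-Monro sketch of the "small-step" paradigm the abstract describes; the particular g and noise model are hypothetical:

```python
import numpy as np

def stochastic_root_find(noisy_g, x0, n_steps=5000, a=1.0, seed=1):
    """Classic "small-step" stochastic approximation (Robbins-Monro) for the
    SRF problem: locate x* with g(x*) = 0 from noisy evaluations of g."""
    rng = np.random.default_rng(seed)
    x = x0
    for n in range(1, n_steps + 1):
        x -= (a / n) * noisy_g(x, rng)   # small step against the noisy observation
    return x

# Hypothetical test function: g(x) = x - 2 observed in Gaussian noise, root x* = 2.
root = stochastic_root_find(lambda x, rng: (x - 2.0) + rng.normal(scale=0.5), x0=0.0)
```

Each iterate stays close to the previous one because the step sizes $a/n$ shrink; the adaptive d-ary search of the paper is an alternative to exactly this conservative behavior.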

Keywords: stochastic root finding, stochastic point location, learning automata, stochastic approximation, stochastic optimization, root-finding algorithm, search problem

Stochastic Approximation for Multivariate and Functional Median

2010

We propose a very simple algorithm to estimate the geometric median, also called spatial median, of multivariate (Small (1990)) or functional data (Gervini (2008)) when the sample size is large. A simple and fast iterative approach based on the Robbins-Monro algorithm (Duflo (1997)), as well as its averaged version (Polyak and Juditsky (1992)), is shown to be effective for large samples of high-dimensional data. They are very fast and only require O(Nd) elementary operations, where N is the sample size and d is the dimension of data. The averaged approach is shown to be more effective and less sensitive to the tuning parameter. The ability of this new estimator to estimate accurately …
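
A sketch of the averaged Robbins-Monro iteration the abstract refers to, under assumed tuning (step sizes $c/n^{3/4}$) and synthetic data; it is not the authors' exact implementation:

```python
import numpy as np

def averaged_median(X, c=1.0, alpha=0.75):
    """Averaged Robbins-Monro estimate of the geometric (spatial) median:
    m_{n+1} = m_n + gamma_n (X_{n+1} - m_n) / ||X_{n+1} - m_n||,
    returning the Polyak-Juditsky average of the iterates."""
    m = X[0].astype(float)
    avg = m.copy()
    for n, x in enumerate(X[1:], start=1):
        gamma = c / n ** alpha               # slowly decreasing step sizes
        d = np.linalg.norm(x - m)
        if d > 0:                            # skip exact ties to avoid 0/0
            m = m + gamma * (x - m) / d
        avg += (m - avg) / (n + 1)           # running average of the iterates
    return avg

rng = np.random.default_rng(0)
data = rng.normal(size=(20000, 3))           # symmetric cloud: median near the origin
est = averaged_median(data)
```

Each data point is touched once and the update costs O(d), which is where the overall O(Nd) operation count comes from.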

Keywords: geometric median, multivariate statistics, stochastic approximation, estimator, robustness, sample size determination

Computation of the Multivariate Oja Median

2003

The multivariate Oja median (Oja, 1983) is an affine equivariant multivariate location estimate with high efficiency. This estimate has a bounded influence function but zero breakdown. The computation of the estimate appears to be highly intensive. We consider different algorithms, both exact and stochastic, for calculating the value of the estimate. In the stochastic algorithms, the gradient of the objective function, the rank function, is estimated by sampling observation hyperplanes. The estimated rank function with its estimated accuracy then yields a confidence region for the true sample Oja median, and the confidence region shrinks to the sample median with the increasing number of…
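
The stochastic approach can be sketched in two dimensions, where observation hyperplanes are lines through pairs of sample points and the Oja objective is the average area of the triangle formed by a pair and the candidate point. The data, step sizes, and averaging below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def oja_median_sgd(X, n_iter=50000, c=1.0, alpha=0.75, seed=0):
    """Stochastic gradient sketch for the bivariate Oja median: sample a pair
    of observations, take the (sub)gradient of the triangle-area objective
    |det(x_j - x_i, m - x_i)| / 2 with respect to m, and average the iterates."""
    rng = np.random.default_rng(seed)
    m = X.mean(axis=0)                       # start at the sample mean
    avg = m.copy()
    n_obs = len(X)
    for n in range(1, n_iter + 1):
        i, j = rng.choice(n_obs, size=2, replace=False)
        a = X[j] - X[i]                      # direction of the sampled hyperplane
        det = a[0] * (m[1] - X[i][1]) - a[1] * (m[0] - X[i][0])
        grad = 0.5 * np.sign(det) * np.array([-a[1], a[0]])
        m = m - (c / n ** alpha) * grad      # Robbins-Monro step
        avg += (m - avg) / (n + 1)
    return avg

rng = np.random.default_rng(2)
sample = rng.normal(size=(1000, 2))          # symmetric cloud: Oja median near 0
oja = oja_median_sgd(sample)
```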

Keywords: multivariate statistics, hyperplane, rank function, bounded influence, stochastic approximation, confidence region, time complexity

Quantitative approximation of certain stochastic integrals

2002

We approximate certain stochastic integrals, typically appearing in Stochastic Finance, by stochastic integrals of integrands that are path-wise constant within deterministic, but not necessarily equidistant, time intervals. We ask for rates of convergence when the approximation error is measured in $L_2$. In particular, we show that by using non-equidistant time nets, in contrast to equidistant time nets, approximation rates can be improved considerably.
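
The quantity under study, the $L_2$ error of replacing a stochastic integral by a Riemann-Itô sum over a time net, can be estimated by Monte Carlo. The sketch below uses equidistant nets and the integral $\int_0^1 W_t\,dW_t = (W_1^2 - 1)/2$ purely for illustration (for this integrand the theoretical error over an equidistant net with $n$ intervals is $1/\sqrt{2n}$); the paper's improvements concern non-equidistant nets:

```python
import numpy as np

def l2_error(n_intervals, n_paths=10000, n_fine=256, seed=0):
    """Monte Carlo estimate of the L2 error when int_0^1 W_t dW_t, which
    equals (W_1^2 - 1)/2 by Ito's formula, is replaced by a Riemann-Ito sum
    over an equidistant time net with n_intervals steps."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(scale=np.sqrt(1.0 / n_fine), size=(n_paths, n_fine))
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])  # W_0 = 0
    exact = (W[:, -1] ** 2 - 1.0) / 2.0
    idx = np.linspace(0, n_fine, n_intervals + 1).astype(int)       # the time net
    approx = np.zeros(n_paths)
    for i in range(n_intervals):
        approx += W[:, idx[i]] * (W[:, idx[i + 1]] - W[:, idx[i]])  # piecewise-constant integrand
    return float(np.sqrt(np.mean((exact - approx) ** 2)))
```

Refining the net from 4 to 16 intervals halves the error, matching the $1/\sqrt{2n}$ rate.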

Keywords: stochastic approximation, rate of convergence, approximation error, Brownian motion, equidistant time nets. Journal: Stochastics and Stochastic Reports

Small-sample characterization of stochastic approximation staircases in forced-choice adaptive threshold estimation

2007

Despite the widespread use of up-down staircases in adaptive threshold estimation, their efficiency and usability in forced-choice experiments have recently been debated. In this study, simulation techniques were used to determine the small-sample convergence properties of stochastic approximation (SA) staircases as a function of several experimental parameters. We found that, provided some general requirements are satisfied (use of the accelerated SA algorithm, a clearly suprathreshold initial stimulus intensity, a large initial step size), convergence was accurate independently of the spread of the underlying psychometric function. SA staircases were also reliable for targeting percent-correct levels far …
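
A plain Robbins-Monro staircase run on a simulated observer illustrates the procedure (the accelerated variant of the abstract additionally shrinks the step only at response reversals); the logistic psychometric function and all parameter values are illustrative assumptions:

```python
import numpy as np

def sa_staircase(threshold=3.0, target=0.5, x0=8.0, c=4.0, n_trials=500, seed=0):
    """Stochastic-approximation staircase for adaptive threshold estimation:
    x_{n+1} = x_n - (c/n) * (r_n - target), where r_n is the binary response
    of a simulated observer with a logistic psychometric function."""
    rng = np.random.default_rng(seed)
    x = x0                                           # suprathreshold starting level
    for n in range(1, n_trials + 1):
        p = 1.0 / (1.0 + np.exp(-(x - threshold)))   # P(correct) at stimulus level x
        r = float(rng.random() < p)                  # 1 = correct, 0 = incorrect
        x -= (c / n) * (r - target)                  # step toward the target level
    return x
```

With `target=0.5` the staircase converges to the level where the observer is correct half the time, i.e. the threshold itself; other percent-correct levels are targeted by changing `target`.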

Keywords: stochastic approximation, psychometric function, sensory threshold, psychophysics, two-alternative forced choice, psychophysical procedures, choice behavior, decision making, usability

Interpolation and approximation in L2(γ)

Assume a standard Brownian motion $W=(W_t)_{t\in[0,1]}$, a Borel function $f:\mathbb{R}\to\mathbb{R}$ such that $f(W_1)\in L_2$, and the standard Gaussian measure $\gamma$ on the real line. We characterize when $f$ belongs to the Besov space $B^{\theta}_{2,q}(\gamma) := (L_2(\gamma), \mathbb{D}_{1,2}(\gamma))_{\theta,q}$, obtained via the real interpolation method, by the behavior of $a_X(f(X_1);\tau) := \|f(W_1) - P^{\tau}_X f(W_1)\|_{L_2}$, where $\tau=(t_i)_{i=0}^{n}$ is a deterministic time net and $P^{\tau}_X : L_2 \to L_2$ is the orthogonal projection onto a subspace of 'discrete' stochastic integrals $x_0 + \sum_{i=1}^{n} v_{i-1}(X_{t_i} - X_{t_{i-1}})$, with $X$ being the Brownian motion or the geometric Brownian motion. By using Hermite polynomial expansions the problem is reduced to a deterministic one. The approximation numbers $a_X(f(X_1);\tau)$ can be used to descr…

Keywords: real interpolation, Besov spaces, stochastic approximation. Journal: Journal of Approximation Theory

A fast and recursive algorithm for clustering large datasets with k-medians

2012

Clustering large samples of high-dimensional data with fast algorithms is an important challenge in computational statistics. Borrowing ideas from MacQueen (1967), who introduced a sequential version of the $k$-means algorithm, a new class of recursive stochastic gradient algorithms designed for the $k$-medians loss criterion is proposed. By their recursive nature, these algorithms are very fast and are well adapted to deal with large samples of data that are allowed to arrive sequentially. It is proved that the stochastic gradient algorithm converges almost surely to the set of stationary points of the underlying loss criterion. Particular attention is paid to the averaged versions, which…
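
A sketch of a MacQueen-style recursive $k$-medians update under assumed tuning; the step-size schedule, initialization on the first $k$ points, and synthetic two-cluster data are illustrative, not the paper's exact construction:

```python
import numpy as np

def online_kmedians(X, k=2, c=2.0, seed=0):
    """Recursive stochastic gradient k-medians: points arrive one at a time,
    and the nearest center moves a shrinking step along the unit vector
    toward the point (the gradient direction of the k-medians loss)."""
    centers = X[:k].astype(float)            # initialize on the first k points
    counts = np.ones(k)
    for x in X[k:]:
        j = int(np.argmin(np.linalg.norm(centers - x, axis=1)))
        counts[j] += 1
        d = np.linalg.norm(x - centers[j])
        if d > 0:                            # skip exact ties to avoid 0/0
            centers[j] += (c / counts[j]) * (x - centers[j]) / d
    return centers

rng = np.random.default_rng(1)
a = rng.normal(loc=(-3.0, 0.0), scale=0.4, size=(4000, 2))
b = rng.normal(loc=(3.0, 0.0), scale=0.4, size=(4000, 2))
X = np.empty((8000, 2))
X[0::2], X[1::2] = a, b                      # interleave the two streams
centers = online_kmedians(X)
```

Each arriving point costs O(kd), so the pass over the stream is linear in the sample size, which is what makes the recursive form attractive for large datasets.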

Keywords: stochastic approximation, Robbins-Monro, stochastic gradient, k-medians, k-medoids, averaging, recursive estimators, online clustering, high-dimensional data, partitioning around medoids, computational statistics

Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound?

2011

The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the time-dependent covariance matrix $S_n = \operatorname{Cov}(X_1,\dots,X_n) + \epsilon I$ at step $n+1$, that is, the sample covariance matrix of the history of the chain plus a (small) constant $\epsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\epsilon I$ is theoretically convenient, but practically cumbersome, as a good value for the parameter $\epsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away …
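
A compact sketch of the lower-bounded AM variant the abstract starts from, with a Welford-style recursive covariance update; the target, scaling constant $s_d = 2.38^2/d$, and identity burn-in proposal are common conventions assumed here, not prescribed by the article:

```python
import numpy as np

def adaptive_metropolis(log_pi, x0, n_steps=20000, eps=1e-6, seed=0):
    """Adaptive Metropolis: random-walk Metropolis whose proposal covariance
    is s_d * (sample covariance of the chain history + eps * I), i.e. the
    epsilon-lower-bounded form S_n = Cov(X_1,...,X_n) + eps * I."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    s_d = 2.38 ** 2 / d                       # usual scaling for the AM proposal
    x = np.asarray(x0, dtype=float)
    mean, cov = x.copy(), np.eye(d)           # burn-in proposal: identity covariance
    chain = [x.copy()]
    for n in range(1, n_steps):
        y = rng.multivariate_normal(x, s_d * (cov + eps * np.eye(d)))
        if np.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y                             # Metropolis accept/reject
        chain.append(x.copy())
        delta = x - mean                      # recursive mean/covariance updates
        mean = mean + delta / (n + 1)
        cov = cov + (np.outer(delta, x - mean) - cov) / (n + 1)
    return np.array(chain)

chain = adaptive_metropolis(lambda z: -0.5 * float(z @ z), np.zeros(2))
```

The `eps * np.eye(d)` term is exactly the eigenvalue lower bound whose removal the article investigates.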

Keywords: adaptive Markov chain Monte Carlo, Metropolis algorithm, stochastic approximation, stability, law of large numbers, covariance, eigenvalues, identity matrix (MSC: 60J27, 65C40, 93E15, 93E35)